
Wellcome Open Research

F1000 Research Ltd

Preprints posted in the last 7 days, ranked by how well they match Wellcome Open Research's content profile, based on 57 papers previously published here. The average preprint has a 0.07% match score for this journal, so anything above that is already an above-average fit.

1
Harmonising UK primary care prescription records for research: A case study in the UK Biobank

Ytsma, C. R.; Torralbo, A.; Fitzpatrick, N. K.; Pietzner, M.; Louloudis, I.; Nguyen, D.; Ansarey, S.; Denaxas, S.

2026-04-22 health informatics 10.64898/2026.04.21.26351274 medRxiv
Top 0.2%
4.8%

Objective The aim of this study was to develop and validate an automated, scalable framework to harmonise fragmented UK primary care prescription records into a research-ready dataset by mapping four diverse medical ontologies to a unified, historically comprehensive reference standard. Materials and Methods We used raw prescription records for consented participants in the UK Biobank, in which participants are uniquely characterized by multiple data modalities. Primary care data were preprocessed by selecting one drug code if multiple were recorded, cleaning codes to match reference presentations, expanding code granularity based on drug descriptions, and updating outdated codes to a single reference version. Harmonisation entailed mapping British National Formulary (BNF) and Read2 codes to dm+d, the universal NHS standard vocabulary for uniquely identifying and prescribing medicines. Harmonised dm+d records were then homogenised to a single concept granularity, the Virtual Medicinal Product (VMP). We validated our methods by creating medication profiles mapping contemporary drug prescribing patterns in 312 physical and mental health conditions. Results We preprocessed 57,659,844 records (100%) from 221,868 participants (100%). Of those, 48,950 records were dropped due to lack of a drug code. 7,357,572 records (13%) used multiple ontologies. Most (76%) records were encoded in BNF and most had the code granularity expanded via the drug description (N=28,034,282; 49%). 41,244,315 records (72%) were harmonised to dm+d and 99.98% of these were converted to VMP as a homogeneous dataset. Across 312 diseases, we identified 23,352 disease-drug associations with 237 medications (represented as BNF subparagraphs) that survived statistical correction, of which most resembled drug-indication pairs.
Conclusion Our methodology converts highly fragmented, raw prescription records of inconsistent data quality into a streamlined, enriched dataset at a single reference standard, version, and granularity. Harmonised prescription records can be readily used by researchers for large-scale analyses.

2
Detection of iron and zinc in human skin using non-invasive Raman spectrophotometer - A validation study among children under five years of age living in sub-Saharan Africa

Abidha, C. A.; Amevor, B. S.; Mank, I.; Oguso, J.; Mbata, M.; Coulibaly, B.; Denkinger, C. M.; Sorgho, R.; Sie, A.; Muok, E. M. O.; Danquah, I.

2026-04-24 public and global health 10.64898/2026.04.22.26351546 medRxiv
Top 0.2%
4.4%

Background: Sub-Saharan Africa (SSA) still experiences a high burden of micronutrient deficiencies. For monitoring of micronutrient status among young children in SSA, non-invasive alternatives to blood-based biomarkers are desirable. Handheld Raman spectrophotometry appears to offer such an alternative for quantifying intracellular stores of micronutrients. In rural Burkina Faso and Kenya, we validated the Cell-/SO-Check device (ZellCheck®) against conventional laboratory-based methods. Methods: For this validation study, we recruited children aged ≥24 months attending routine clinics within the Health and Demographic Surveillance Systems (HDSS) in Siaya and Nouna. Anthropometric measurements and venous blood samples were taken. Plasma ferritin, soluble transferrin receptor (sTfR) and C-reactive protein (CRP) were measured by ELISA, and plasma zinc by atomic absorption spectrometry. The spectrometer was used to quantify zinc and iron. For continuous outcomes, we generated Bland-Altman plots and calculated bias and limits of agreement (LoA). For binary outcomes, we produced Receiver Operating Characteristic (ROC) areas under the curve (AUC), and estimated sensitivity, specificity and predictive values. Results: We analysed data from 48 children from Burkina Faso and 54 children from Kenya (male: 53%; age range: 24-66 months). According to spectrophotometry, the proportions of iron deficiency and zinc deficiency were 16.7% and 25.5%, respectively. The median concentrations were 24.0 µg/L for ferritin (range: 2.0-330.0), 5.7 mg/L for sTfR (2.8-51.0), and 9.9 µmol/L for zinc (5.2-25.0). The corresponding bias for iron levels by spectrophotometry was 42.4 with LoA: -18.7, 103.6. The bias for zinc levels was 7.5 with LoA: -49.3, 64.2. For the classification of deficiency, the ROC-AUC, sensitivity, and specificity of spectrophotometry vs. biomarker-based diagnosis were 0.62, 68% and 55% for iron deficiency, and 0.55, 33% and 91% for zinc deficiency, respectively.
Conclusions: The Cell-/SO-Check device may be used to rank children in population-based studies in SSA according to their zinc status, but not iron status. The method should not replace the standard laboratory measurements for clinical diagnoses of zinc and iron deficiencies.
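The bias and limits of agreement (LoA) reported above follow the standard Bland-Altman construction: bias is the mean of the paired (device minus reference) differences, and the 95% LoA are bias ± 1.96 × SD of those differences. A minimal Python sketch, using invented paired zinc readings rather than the study's data:

```python
import math

def bland_altman(reference, device):
    """Bland-Altman agreement statistics for paired measurements.

    Returns the bias (mean of device - reference differences) and the
    95% limits of agreement (bias +/- 1.96 * SD of the differences).
    """
    diffs = [d - r for r, d in zip(reference, device)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented paired zinc readings (umol/L), for illustration only
reference = [9.0, 10.5, 8.2, 12.0, 9.9]
device = [9.5, 11.0, 8.0, 13.1, 10.2]
bias, (loa_low, loa_high) = bland_altman(reference, device)
```

A LoA interval that is wide relative to the clinically relevant range, as reported for iron here, is what argues against one method replacing the other.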

3
Ethnic inequalities in respiratory virus epidemics in England: a mathematical modelling study

Robert, A.; Goodfellow, L.; Pellis, L.; van Leeuwen, E.; Edmunds, W. J.; Quilty, B. J.; van Zandvoort, K.; Eggo, R. M.

2026-04-21 infectious diseases 10.64898/2026.04.18.26350858 medRxiv
Top 0.2%
3.7%

Background: In England, the burden of respiratory infections varies by ethnicity, contributing to health inequalities, but the role of additional demographic factors remains underexplored. We quantified how differences in social mixing and demographic characteristics between ethnic groups cause inequalities in transmission dynamics. Methods: We analysed the association between ethnicity and the number of contacts of 12,484 participants in the 2024-2025 Reconnect social contact survey, using a negative binomial regression model. We simulated respiratory pathogen epidemics using a compartmental model stratified by age, ethnicity, and contact levels, at a national level and in major cities in England. Findings: After adjusting for demographic variables, participants of Black and Mixed ethnicities had more contacts than those of White ethnicity (rate ratios (RR): 1.18 [95% Credible Interval (CI): 1.11-1.26], and 1.31 [95% CI: 1.14-1.52]). Participants of Asian ethnicity had fewer contacts (RR: 0.85 [95% CI: 0.79-0.91]). In national-level simulations, individuals of White ethnicity had the lowest attack rates due to demographic differences and mixing patterns. Local demographic structures changed simulated dynamics: attack rates in individuals of Black and Mixed ethnicities were approximately double those of White ethnicity in Birmingham, but less than 60% higher in Liverpool. Interpretation: Demographic characteristics and mixing patterns create inequalities in transmission dynamics between ethnicities, while local demographic characteristics and pathogen infectiousness change the expected relative burden. To ensure mitigation strategies are effective and equitable, their evaluation must explicitly account for inequalities arising from local context.
Funding: Medical Research Council, National Institute for Health and Care Research, Wellcome Trust. Research in context. Evidence before this study: We searched PubMed for population-based studies quantifying differences in respiratory infections between ethnic groups, up to 1 April 2026, with no language restrictions. Keywords included: (respiratory pathogens OR influenza OR COVID-19) AND (ethnic* OR race) AND (inequ*) AND (compartmental model OR incidence rate ratio OR hazard ratio). We excluded studies that focused on non-respiratory pathogens (e.g. looking at consequences of COVID-19 on incidence of other pathogens). A population-based cohort study showed that influenza infection risk was higher in South Asian, Black, and Mixed ethnic groups compared to White ethnicity in England. Another population-based cohort study highlighted that during the first wave of COVID-19 in England, the South Asian, Black, and Mixed ethnic groups were more likely to test positive and to be hospitalised than the White ethnic group. Census data in England showed that the distributions of age, household size, household income and employment status differed between ethnic groups, and the recent Reconnect social contact surveys highlighted the impact of each demographic factor on participants' number of contacts. Added value of this study: Our study shows that social contact patterns, mixing, and demographic structure all lead to unequal infection risk between ethnic groups in respiratory pathogen epidemics. Using the largest available social contact survey in England, we show that both the average number of contacts and the proportion of high-contact individuals varied by ethnic group, even after adjusting for participants' demographics. These differences, together with mixing patterns and age structure, led to lower expected incidence among individuals of White ethnicity than in all other ethnic groups in simulated outbreaks.
The level of inequality between ethnic groups changed when we used different values of pathogen transmissibility. Finally, as ethnic composition and population structure differ between cities in England, our results show differences in expected inequalities at a local level. Implications of all the available evidence: Inequalities in infection risk between ethnic groups are context- and pathogen-dependent. They arise from both local population structure and contact patterns. Detailed information on mixing between groups and population structure is needed to accurately measure group-specific infection risk. These findings indicate that public health interventions based only on national-level estimates conceal regional variation in risk and may ultimately increase inequalities. Public health interventions need to be tailored to local contexts to be equitable and effective. Finally, our findings provide a foundation for understanding the progression from infection-risk inequalities to disparities in disease presentation and clinical outcomes.

4
Evolving concerns about the COVID-19 pandemic: A content analysis of free-text reports from the UK COVID-19 Public Experiences (COPE) study cohort over a two-year period

Phillips, R.; Wood, F.; Torrens-Burton, A.; Glennan, C.; Sellars, P.; Lowe, S.; Caffoor, A.; Hallingberg, B.; Gillespie, D.; Shepherd, V.; Poortinga, W.; Wahl-Jorgensen, K.; Williams, D.

2026-04-19 public and global health 10.64898/2026.04.16.26351013 medRxiv
Top 0.4%
2.7%

Objectives Concerns about COVID-19 were a key driver of infection-prevention behaviour during the pandemic. The aim of this study was to gain an in-depth longitudinal understanding of the type and frequency of concerns experienced throughout the first two years of the COVID-19 pandemic. Design Content analysis of qualitative descriptions provided in a prospective longitudinal online survey as part of the COVID-19 UK Public Experiences (COPE) Study. Method At baseline (March/April 2020), when the UK entered its first national lockdown, 11,113 adults completed the COPE survey. Follow-up surveys were conducted at 3, 12, 18 and 24 months. Participants were recruited via the HealthWise Wales research registry and social media. Baseline surveys collected demographic and health data, and all waves included an open-ended question about COVID-19 concerns. Content analysis was used to identify the type and frequency of concerns at each time point. Results A total of 41,564 open-text responses were coded into six categories: personal harm (n=16,353), harm to others (n=11,464), social/economic impact (n=6,433), preventing transmission (n=4,843), government/media (n=1,048), and general concerns (n=1,423). The proportion of respondents reporting any concern declined from 75.3% at baseline to 65.8% at 24 months. Over time, concerns about personal harm increased (baseline 41.8% vs. 24-months 52.7%) whereas concerns about harm to others decreased (baseline 48.5% vs. 24-months 28.6%). Concerns about harm were also expressed in relation to clinical vulnerability, lack of trust in government/media, and perceived lack of adherence by others. These were balanced against concerns about wider social and economic impacts of restrictions. Conclusions Public concerns about COVID-19 evolved substantially over the first two years of the pandemic, reflecting changing perceptions of risk and responsibility. 
Monitoring concerns longitudinally is vital to help guide effective communication and behavioural interventions during future pandemics.

5
A Biophysical Model of Human Colonic Motor Pattern Generation in Health and Disease

Anantha Krishnan, A.; Dinning, P. G.; Holland, M. A.

2026-04-20 biophysics 10.64898/2026.04.15.718795 medRxiv
Top 0.4%
2.7%

Purpose: Colonic motility disorders, including diarrhea-predominant irritable bowel syndrome and slow-transit constipation, impose a major clinical burden. Although high-resolution colonic manometry reveals characteristic spatiotemporal motor patterns, such as high-amplitude propagating contractions and the cyclic motor pattern in healthy individuals, these patterns are often altered or absent in disease. Understanding how these patterns arise from underlying pacemaker, neural, and mechanical mechanisms is essential for improving treatment strategies. Methods: We developed a biophysical whole-colon model that integrates an Interstitial Cells of Cajal-inspired oscillator network, enteric nervous system reflexes, a pressure-gated modulation element motivated by rectosigmoid brake behavior, and a nonlinear tube law describing colon wall mechanics. The model simulates spatiotemporal pressure patterns along the colon and allows systematic variation of physiological parameters associated with pacemaker activity, neural reflex control, and distal gating. Results: A small set of parameters reproduces three illustrative motility patterns corresponding to healthy motility, diarrhea-predominant irritable bowel syndrome, and slow-transit constipation. The simulated pressure maps recapitulate key features observed in high-resolution manometry, including propagation direction, regional patterning of contractions, and case-specific changes in amplitude and coordination. Sensitivity analysis suggests that proximal excitation strength and waveform morphology strongly influence global motility metrics. Conclusion: Our study presents a simple, biophysical framework for reproducing clinically observed colonic motor patterns and exploring their disruption in disease. More broadly, the model may help interpret clinical manometry in mechanistic terms and support hypothesis-driven in silico studies of colonic motility disorders.

6
Predicting Depressive Symptoms Among Reproductive-Aged Women in Bangladesh Using Bagging Ensemble Machine Learning on Imbalanced Bangladesh Demographic and Health Survey 2022 Data

Mahmud, S.; Akter, M. S.; Ahamed, B.; Rahman, A. E.; El Arifeen, S.; Hossain, A. T.

2026-04-23 public and global health 10.64898/2026.04.22.26351445 medRxiv
Top 0.5%
2.3%

Background Depressive symptoms among reproductive-aged women represent a major public health concern in low- and middle-income countries, yet systematic screening remains limited. In most population survey datasets, the low prevalence of depression results in severe class imbalance, which challenges conventional machine learning models. Therefore, we develop and evaluate a bagging-based ensemble machine learning framework to predict depressive symptoms among reproductive-aged women using highly imbalanced Bangladesh Demographic and Health Survey (BDHS) 2022 data. Methods The sample comprised women aged 15-49 years drawn from BDHS 2022 data. Depressive symptoms were defined using the Patient Health Questionnaire (PHQ-9 score ≥10). Candidate predictors were drawn from sociodemographic, reproductive, nutritional, psychosocial, healthcare access, and environmental domains. Feature selection was performed using Elastic Net (EN), Random Forest (RF), and XGBoost. Five classifiers (EN, RF, Support Vector Machine (SVM), K-nearest neighbors (KNN), and Gradient Boosting Machine (GBM)) were trained using both oversampling-based approaches and the proposed ensemble framework. Model performance was evaluated on an independent test set using accuracy, sensitivity, specificity, F1-score, and the normalized Matthews correlation coefficient (normMCC). Results Approximately 4.8% of women were identified with depressive symptoms. The proposed bagging ensemble framework consistently achieved more balanced predictive performance than oversampling-based models. Average normMCC improved from 0.540 (oversampling) to 0.557 (ensemble). RF and GBM ensembles demonstrated notable improvements in identifying depressive cases, while the EN ensemble achieved the highest overall performance and sensitivity. Threshold optimization yielded stable normMCC across models, indicating robust trade-offs between sensitivity and specificity.
Conclusions Bagging-based ensemble learning provides a more robust and balanced approach than synthetic oversampling for predicting depressive symptoms in highly imbalanced population survey data. This approach has important implications for improving early identification and population-level mental health surveillance in resource-constrained settings.
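The balanced-bagging idea behind this abstract can be sketched in plain Python: each base learner trains on all minority cases plus an equal-sized random draw from the majority class, and the bags then vote by majority. The nearest-centroid base learner below is a stand-in for the EN/RF/SVM/KNN/GBM classifiers the authors actually used, and the toy data are invented:

```python
import random

def centroid_fit(X, y):
    """Tiny base learner: store the mean feature vector of each class."""
    cents = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        cents[label] = [sum(col) / len(col) for col in zip(*rows)]
    return cents

def centroid_predict(cents, x):
    """Predict the class whose centroid is nearest (squared Euclidean)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda lab: dist(cents[lab], x))

def balanced_bagging(X, y, n_bags=15, seed=0):
    """Train each base learner on a balanced bootstrap: every minority
    case plus an equal-size random draw from the majority class."""
    rng = random.Random(seed)
    minority = [i for i, lab in enumerate(y) if lab == 1]
    majority = [i for i, lab in enumerate(y) if lab == 0]
    models = []
    for _ in range(n_bags):
        idx = minority + [rng.choice(majority) for _ in minority]
        models.append(centroid_fit([X[i] for i in idx], [y[i] for i in idx]))
    def predict(x):  # majority vote across bags
        votes = [centroid_predict(m, x) for m in models]
        return max(set(votes), key=votes.count)
    return predict

# Toy imbalanced data: the rare class (label 1) clusters near (5, 5)
X = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 0], [0, 2], [5, 5], [6, 5]]
y = [0, 0, 0, 0, 0, 0, 1, 1]
predict = balanced_bagging(X, y)
```

The balanced draw inside each bag is what counters a ~4.8% positive rate without fabricating synthetic minority samples, which is the contrast the abstract draws with oversampling.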

7
Episia: An Open-Source Python Library for Epidemiological Surveillance, Modeling, and Biostatistics in Resource-Limited Settings

Ouedraogo, F. A. S.

2026-04-20 epidemiology 10.64898/2026.04.17.26350337 medRxiv
Top 0.7%
1.9%

Despite the evolution of epidemiological analysis and modeling tools, difficulties remain, especially in developing countries, regarding the availability and use of these tools. Although efficient, they are often expensive, require high technical expertise, and demand constant connectivity or significant computational resources, leaving a major gap with the operational realities of health districts. It is in this context that we introduce Episia, an open-source Python library designed to provide a framework that facilitates epidemiological analysis and modeling. It integrates a suite of compartmental epidemic models (SIR, SEIR, SEIRD) with Monte Carlo sensitivity analysis, a complete biostatistics suite validated against the OpenEpi reference standard, and a native DHIS2 client for automated data ingestion. Developed in Burkina Faso, Episia is optimized for the health challenges encountered in Africa while remaining a versatile tool for global health informatics.
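Episia's own API is not shown in the abstract, but the SIR model it lists can be sketched independently. The function below is a minimal stand-alone implementation, not Episia's interface; the name `simulate_sir` and the parameter values are illustrative assumptions:

```python
def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the classic SIR compartmental model:
    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I.
    Returns one (S, I, R) sample per day."""
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    trajectory = [(s, i, r)]
    steps_per_day = int(round(1 / dt))
    for _ in range(days):
        for _ in range(steps_per_day):
            new_infections = beta * s * i / n * dt
            new_recoveries = gamma * i * dt
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
        trajectory.append((s, i, r))
    return trajectory

# Illustrative parameters only (R0 = beta/gamma = 2.5), not Episia defaults
traj = simulate_sir(beta=0.5, gamma=0.2, s0=9990, i0=10, r0=0, days=120)
peak_infected = max(i for _, i, _ in traj)
```

Pure-Python Euler stepping like this is also why such a model can run offline on modest district-level hardware, the constraint the abstract emphasises.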

8
Defining influenza epidemic zones through temporal clustering of global surveillance data

Hassell, N.; Marcenac, P.; Bationo, C. S.; Hirve, S.; Tempia, S.; Rolfes, M. A.; Duca, L. M.; Hammond, A.; Wijesinghe, P. R.; Heraud, J.-M.; Pereyaslov, D.; Zhang, W.; Kondor, R. J.; Azziz-Baumgartner, E.

2026-04-25 public and global health 10.64898/2026.04.17.26351048 medRxiv
Top 0.7%
1.9%

Introduction: Modeling when influenza epidemics typically occur can help countries optimize surveillance, time clinical and public health interventions, and reduce the burden of influenza. Methods: We used influenza virus detections reported during 2011-2024 by 180 countries to the Global Influenza Surveillance and Response System, excluding COVID-19 pandemic-impacted years (2020-2023). We analyzed data by calendar-year (week 1-52) or shifted-year (week 30-29) time windows, based on when most influenza detections occurred in each country. For countries with sufficient data, we computed generalized additive models (GAMs) of each country's weekly influenza-positive tests to smooth and impute time series distributions. From these GAMs, we calculated each country's normalized weekly influenza burden. Country-specific normalized time series were grouped using hierarchical k-means clustering, minimizing the Euclidean distance between time series within clusters. We calculated cluster-specific GAMs to estimate average seasonal timing. Countries without sufficient data were assigned to a cluster based on population-weighted latitudinal distance to a cluster's mean latitude. Results: We identified five clusters, or epidemic zones, from 111 countries with sufficient data. The influenza burden in epidemic zones A and B was consistent with a northern hemisphere pattern, with most influenza detections occurring during October-April (A) and September-March (B), while epidemic zones D and E were characterized by southern hemisphere-like seasonal timing, with most influenza burden occurring during May-November. Epidemic zone C had most influenza burden occurring during September-March; most countries assigned to this cluster were in the tropics.
Conclusion: Epidemic zones may serve as a useful tool to strengthen and optimize influenza surveillance for global health decision-making (e.g., during vaccine strain composition discussions) and to guide country preparedness efforts for seasonal influenza epidemics, including the timing of enhanced surveillance, as well as the procurement and delivery of vaccines and antivirals.
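The core grouping step described here can be illustrated in two pieces: normalising each country's weekly counts into a seasonal profile, then assigning the profile to the nearest cluster mean by Euclidean distance. This is a simplified sketch (assignment to fixed cluster means rather than full hierarchical k-means), with invented 4-week profiles standing in for 52-week curves:

```python
import math

def normalise(series):
    """Scale a weekly count series so it sums to 1 (a seasonal profile)."""
    total = sum(series)
    return [v / total for v in series]

def nearest_cluster(profile, cluster_means):
    """Assign a normalised profile to the cluster whose mean weekly
    profile is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(cluster_means, key=lambda name: dist(profile, cluster_means[name]))

# Toy 4-'week' profiles standing in for 52-week seasonal curves
means = {
    "north": [0.4, 0.4, 0.1, 0.1],   # early-year peak
    "south": [0.1, 0.1, 0.4, 0.4],   # mid-year peak
}
country = normalise([80, 90, 10, 20])
zone = nearest_cluster(country, means)
```

Normalising first means countries are clustered by the *timing* of their epidemics rather than by absolute surveillance volume, which differs widely between the 180 reporting countries.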

9
Hemagglutination inhibition and alternate serologic responses following Influenza A(H3N2) virus infection

Chen, B.; Zambrana, J. V.; Shotwell, A.; Sanchez, N.; Plazaola, M.; Ojeda, S.; Lopez, R.; Stadlbauer, D.; Kuan, G.; Balmaseda, A.; Krammer, F.; Gordon, A.

2026-04-22 infectious diseases 10.64898/2026.04.21.26351404 medRxiv
Top 0.8%
1.8%

Background: Although the hemagglutination inhibition (HAI) titer remains the gold standard correlate of protection against influenza, it does not fully capture the broader antibody responses that contribute to immunity. Methods: We analyzed immune responses in paired pre-infection and convalescent sera from 306 RT-PCR-confirmed A/H3N2 infections from two household studies (2014-18) in Managua, Nicaragua. Antibody responses were measured by HAI and enzyme-linked immunosorbent assays (ELISAs) against full-length hemagglutinin (HA), the HA stalk, and neuraminidase (NA). Participants were classified as HAI responders (≥4-fold HAI rise), alternate responders (no HAI rise but ≥4-fold boost in ≥1 ELISA), or no-response individuals (no ≥4-fold rise in any assay). We compared demographic, clinical, and pre-infection antibody characteristics across these groups. We also analyzed predictors of an NA response. Results: Overall, 77% of participants had HAI seroconversion or a 4-fold rise. Among the 23% HAI non-responders, 62% had alternate antibody responses. No-response individuals had the highest pre-infection HAI and full-length HA titers (p < 0.0001), the lowest viral loads, and the fewest fever or influenza-like illness (ILI) symptoms (p < 0.01). An NA response was more common among symptomatic individuals (p = 0.0483) and those with low or high baseline NA titers. Conclusions: High baseline HAI titers can limit detectable 4-fold rises and are associated with milder illness. Evaluating additional immune responses may capture a more complete picture of the host response to infection, thereby improving surveillance and informing vaccine development. Keywords: Influenza A/H3N2; hemagglutination inhibition (HAI); neuraminidase antibodies; symptomatic vs asymptomatic infection; correlates of protection.
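The responder classification in this abstract reduces to simple fold-rise rules. The sketch below applies them directly; the 4-fold thresholds come from the abstract, while the function name and the titre values are hypothetical:

```python
def classify_response(hai_pre, hai_post, elisa_pairs):
    """Fold-rise classification per the abstract: 'HAI' responder on a
    >=4-fold HAI titre rise; otherwise 'alternate' on a >=4-fold rise in
    any ELISA (full-length HA, HA stalk, or NA); otherwise 'none'."""
    if hai_post / hai_pre >= 4:
        return "HAI"
    if any(post / pre >= 4 for pre, post in elisa_pairs):
        return "alternate"
    return "none"

# Hypothetical titres: only a 2-fold HAI rise, but an 8-fold NA ELISA boost
group = classify_response(40, 80, [(100, 800)])
```

Because titres are measured on a fold scale, a participant starting at a high baseline (e.g. HAI 160) needs an absolute rise to 640 to register, which is the ceiling effect the conclusion describes.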

10
Hemodialysis Prescribing Patterns of Hospital & Satellite Centres: An Institution-Wide Observational Study

Melville, S.; MacKinnon, M.; Michaud, J.

2026-04-22 nephrology 10.64898/2026.04.20.26351284 medRxiv
Top 0.8%
1.8%

Background: Life-sustaining hemodialysis (HD) is onerous for patients, especially those with multiple co-morbidities and advanced age. A standard HD prescription is 720 minutes per week. Alternative HD regimens have been proposed in an attempt to maintain quality of life (QOL). Studies are needed to investigate the efficacy and safety of less frequent HD prescriptions in this population. This is an institution-wide observational study in New Brunswick, Canada, comparing HD prescriptions and their impact on QOL and mortality. Objective: The purpose of this study is to assess the current HD prescribing practices at a provincial healthcare institution in relation to patient QOL. Design: Prospective observational study. Setting: Single-centre hospital and satellite hemodialysis units. Patients: Voluntarily consented patients undergoing in-centre hemodialysis treatment. Measurements: Observational clinical data were collected for each study participant from their hospital and dialysis electronic medical records. The KDQOL-36™ questionnaire was used to assess patient-reported quality of life at the time of consent. Methods: Adults undergoing in-centre or satellite-site HD for at least 3 months were eligible to participate. Consenting participants were grouped by HD prescription: 720 minutes or more per week, or less than 720 minutes per week. All participants completed the KDQOL-36™ questionnaire to estimate QOL, and groups were compared using the Mann-Whitney U test. Emergency department visits, hospitalizations, and mortality were analyzed using negative binomial regression or logistic regression. Results: We enrolled 140 participants; 41 were undergoing less than 720 minutes per week of HD and 99 were undergoing 720 minutes or more of HD per week. Patients undergoing less than 720 minutes per week of HD were older [median (IQR): 76 (72-81) yrs vs.
64 (55-75) yrs; p < 0.001], had higher median (IQR) QOL scores on the Symptoms/Problems List scale of the KDQOL-36™ questionnaire [79.2 (70.8-88.5) vs. 70.8 (62.5-81.3); p = 0.0022], and were less likely to present to the emergency department (incident rate ratio 0.52, 95% confidence interval [CI] 0.33-0.81). Mortality was similar between groups, even when adjusted for age and comorbidity score (odds ratio 1.62, 95% CI 0.59-4.49). Limitations: Participant enrollment was limited by the single-centre nature of this study. As this was an observational study, we did not account for how long the patients had been prescribed less than 720 minutes of hemodialysis. We did not include a frailty assessment of the study participants. A higher number of study participants may have identified significant trends in mortality. Conclusions: The results of this study show that patients undergoing less than 720 minutes of weekly HD had a higher QOL score on the KDQOL-36™ Symptoms/Problems List scale, presented less frequently to the emergency department, and were not more likely to die than patients undergoing 720 minutes or more of weekly HD. Further studies are required to assess the feasibility and safety of a conservative model of HD prescribing to improve QOL for patients with palliative care treatment goals.

11
Differential effects of fenofibrate and fenofibric acid on the regulation of liver endothelial permeability

Luty, M. T.; Borah, D.; Szafranska, K.; Giergiel, M.; Trzos, K.; McCourt, P.; Lekka, M.; Kotlinowski, J.; Zapotoczny, B.

2026-04-20 cell biology 10.64898/2026.04.16.718907 medRxiv
Top 0.8%
1.8%

Background and Aims: Fenofibrate is widely prescribed for hyperlipidaemia and has been associated with rare but severe cases of drug-induced liver injury (DILI), yet its effects on liver sinusoidal endothelial cells (LSECs) remain to be investigated. LSECs maintain a highly permeable specialized sinusoidal barrier characterized by transcellular pores (fenestrations), regulating the bidirectional transfer of circulating compounds to and from the hepatocytes. As drug-induced alterations in fenestration architecture could influence xenobiotic access to hepatocytes, these changes may modulate pathways associated with DILI. Understanding the effects of fenofibrate on LSEC ultrastructure may therefore provide insights into previously underexplored endothelial contributions to hepatic drug responses. Methods: Both fenofibrate and its active metabolite, fenofibric acid, were evaluated for their effects on LSEC ultrastructure, mechanical properties, and functional markers. Atomic force microscopy (AFM) and scanning electron microscopy (SEM) were used to quantify fenestration architecture. AFM was additionally used to measure cellular mechanical properties, which were interpreted in the context of fluorescence-based quantification of cytoskeletal organization. Gene expression, viability, and cytotoxicity were assessed using PCR-based and biochemical assays. Results: Fenofibrate reduced fenestration number and porosity at both tested concentrations (10 and 25 µM). It also decreased the apparent Young's modulus of LSECs, accompanied by changes in tubulin and actin architecture, without detectable cytotoxicity. In contrast, treatment with fenofibric acid did not result in significant structural or mechanical effects on LSECs, even at higher concentrations.
Conclusions: Together, these data identify LSECs as a drug-responsive hepatic cell type for fenofibrate, suggesting that LSECs could represent an underrecognized contributor to the complex, multifactorial processes underlying DILI. This work provides a framework for evaluating endothelial contributions to fenofibrate-associated liver effects in more complex models. [Graphical abstract: Fenofibrate reduces LSEC fenestrations and metabolic activity at higher concentrations, while its metabolite, fenofibric acid, does not affect LSECs, regardless of concentration.]

12
The Acceptability and Impact of the Community-Based Blood Pressure Group pilot intervention in Zimbabwe.

Mhino, F. M.; Ndanga, A.; Chivandire, T.; Sekanevana, C.; Mpandaguta, C. E.; Mwanza, T.; Mutengerere, A.; Scott, S.; Chimberengwa, P.; Dixon, J.; Ndhlovu, C. E.; Seeley, J.; Chingono, R. M. S.; Sabapathy, K.

2026-04-22 public and global health 10.64898/2026.04.20.26351307 medRxiv
Top 0.8%
1.7%

Introduction: Over one billion people worldwide have hypertension. In Zimbabwe, prevalence is an estimated 38%, surpassing the global average of 34%, and more than 50% of people with hypertension are undiagnosed. The Community BP groups (Com-BP) study examined whether community groups of people living with hypertension, provided with BP machines and led by trained facilitators, could improve awareness, screening, and support for those diagnosed with hypertension, to help blood pressure (BP) control. We present findings from the quantitative evaluation of the Com-BP pilot intervention. Methods: We evaluated the acceptability of the Com-BP intervention and its potential effectiveness in improving knowledge, attitudes and practices (KAP) and in reducing BP among hypertensive adults in Zimbabwe. Cross-sectional surveys using standardised questionnaires, together with BP and body mass index (BMI) assessments, were done at the start and end of the pilot intervention. Statistical evidence of a difference between baseline and follow-up was examined using the Wilcoxon signed-rank test for continuous data and McNemar's test for categorical data. Results: Fourteen groups (seven urban and seven rural) were formed and 151 participants joined over a median of 5 months. Retention in the groups was 97.9% (137/140 recruited at baseline), with approximately equal numbers from the urban and rural sites. Median age at baseline was 54 years (IQR 45-66 y; range 30-92 y) and the majority (79%, n=108) were female. Most participants (82.5%, n=113) rated their experience of the group sessions as excellent. Changes in KAP from baseline to endline were as follows: 45.3% (n=62) to 81.0% (n=111) (p=0.004) able to identify at least two predisposing factors for hypertension; 65.0% (n=89) to 77.4% (n=106) (p=0.02) reporting ≥1 day of vigorous physical activity per week; 28.5% (n=39) to 13.9% (n=19) (p=0.001) reporting salt added to meals at the table.
There was no statistical evidence of any difference in medication adherence (p=0.06). The proportion of participants with uncontrolled hypertension was 58.1% (n=79) at baseline and reduced to 31.8% (n=43) at follow-up (p<0.001). Discussion: Community groups for improving awareness, detection and support are acceptable and led to improvements in self-reported KAP and in the prevalence of uncontrolled BP. Further research on the sustainability and impact of the intervention is required.
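The categorical before/after comparisons above use McNemar's test on paired responses. A minimal pure-Python sketch of the exact (binomial) form of that test; the discordant-pair counts in the usage note are illustrative, not taken from the study:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar test on the discordant pairs.

    b = participants who changed from "no" at baseline to "yes" at endline,
    c = participants who changed from "yes" to "no". Under H0 each discordant
    pair is equally likely to fall in b or c, i.e. Binomial(b + c, 0.5).
    """
    n = b + c
    k = min(b, c)
    # Two-sided exact binomial p-value: double the smaller tail, cap at 1.
    p = sum(comb(n, i) for i in range(0, k + 1)) * 0.5 ** n * 2
    return min(1.0, p)
```

For example, `mcnemar_exact(0, 5)` (all five discordant pairs changed in one direction) gives p = 0.0625, while balanced discordance such as `mcnemar_exact(3, 3)` gives p = 1.0.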

13
The Impact of Malnutrition on Host Responses to Severe Infection in Adults: A Multicenter Analysis from Uganda

Conte Cortez Martins, G.; Lutwama, J. J.; Owor, N.; Namulondo, J.; Ross, J. E.; Lu, X.; Asasira, I.; Kiyingi, T.; Nsereko, C.; Nsubuga, J. B.; Shinyale, J.; Kiwubeyi, M.; Nankwanga, R.; Nie, K.; Reynolds, S. J.; Kayiwa, J.; Kim-Schulze, S.; Bakamutumaho, B.; Cummings, M.

2026-04-22 public and global health 10.64898/2026.04.20.26351315 medRxiv
Top 1%
1.5%

Objective: Studies of nutritional status and host responses during severe and critical illness have focused predominantly on obesity; in contrast, the relationship between undernutrition, host responses, and clinical outcomes in adults hospitalized with severe infection remains poorly defined. We sought to determine whether severe undernutrition is associated with distinct host responses and clinical outcomes in adults hospitalized with severe infection. Design: Prospective cohort study. Setting: Two public referral hospitals in Uganda. Patients: Non-pregnant adults (≥18 yr) hospitalized with severe, undifferentiated infection. Interventions: None. Measurements and Main Results: We analyzed clinical data and serum Olink proteomic data from 432 participants (median age, 45 yr [IQR, 31-57 yr]; 44% male). Overall, 213 participants (49%) met prespecified criteria for undernutrition, including 52 (12%) with severe undernutrition. Clinically, severe undernutrition was associated with HIV coinfection, microbiologically diagnosed tuberculosis, greater physiological instability, and higher mortality. After adjustment for age, sex, illness duration, study site, and HIV, malaria, and tuberculosis coinfection, severe undernutrition was associated with higher expression of proteins involved in pro-inflammatory immune signaling, endothelial and vascular remodeling, hypoxia and oxidative stress responses, and extracellular matrix remodeling, together with lower expression of proteins linked to growth signaling, anticoagulant regulation, and lipid homeostasis. Conclusions: Severe undernutrition is associated with a distinct high-risk clinical phenotype and biologic signature in adults hospitalized with severe infection. These findings suggest that undernutrition may potentiate key domains of sepsis pathobiology, with implications for strengthening nutritional support and informing host-directed treatment strategies in low- and middle-income countries where malnutrition is common.
Key Points. Question: How does undernutrition influence immune, metabolic, and endothelial responses to severe infection in adults? Findings: In this multicenter cohort study of 432 adults hospitalized with severe infection in Uganda, severe undernutrition was associated with greater physiologic instability, higher mortality, and a distinct proteomic host-response profile. Adults with severe undernutrition exhibited a proteomic signature characterized by pro-inflammatory immune signaling, endothelial and extracellular matrix remodeling, and hypoxia and oxidative stress responses, together with lower expression of proteins involved in growth signaling, anticoagulant regulation, and lipid homeostasis. Meaning: Severe undernutrition is associated with a distinct high-risk clinical and biologic phenotype during severe infection, with implications for nutritional support, risk stratification, and host-directed therapeutic strategies, particularly in low- and middle-income countries.

14
Single-Nephron Dynamics Across Chronic Kidney Disease Stages in Overt Diabetic Nephropathy

Miura, A.; Okabe, M.; Okabayashi, Y.; Sasaki, T.; Haruhara, K.; Tsuboi, N.; Yokoo, T.

2026-04-23 nephrology 10.64898/2026.04.21.26351385 medRxiv
Top 1%
1.5%

Background: Single-nephron glomerular filtration rate (GFR) represents a nephron-level functional index that may reveal key pathophysiological mechanisms driving progression in patients with diabetic nephropathy. However, its clinical relevance remains incompletely understood. This cross-sectional study assessed single-nephron estimated GFR (eGFR) across different chronic kidney disease (CKD) stages in patients with advanced diabetic nephropathy. Methods: Nephron number was estimated as the number of nonglobally sclerotic glomeruli per kidney using computed tomography-derived cortical volume combined with biopsy stereology. Single-nephron eGFR was calculated by dividing eGFR by the nephron number of both kidneys. Patients were stratified according to CKD stage at kidney biopsy. Associations between CKD stages and single-nephron eGFR were evaluated using multivariable linear regression models adjusted for age, sex, urinary protein excretion, and eGFR. Results: The study included 105 patients with biopsy-proven diabetic nephropathy and overt proteinuria (median age 59 years, 83% male, HbA1c 6.6%, 57% had nephrotic range proteinuria). The percentage of globally sclerotic glomeruli, mesangial expansion score, and prevalence of nodular lesions increased significantly with advancing CKD stage. Median nephron number declined from 529,178 to 224,458 per kidney, whereas glomerular volume remained constant. Single-nephron eGFR decreased markedly with CKD stage and remained significantly inversely associated with CKD stage after adjustment for clinicopathologic covariates (P for trend <0.001). Conclusion: In overt diabetic nephropathy, single-nephron eGFR decreased with advancing CKD stage, despite relatively preserved glomerular volume. At this stage of disease, structural alterations specific to diabetic nephropathy may impair effective single-nephron filtration capacity.
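The central quantity in this study is a simple ratio: whole-kidney eGFR divided by the estimated nephron number of both kidneys. A minimal sketch of that arithmetic; the function name, the nL/min units, and the example numbers are illustrative assumptions, not taken from the paper:

```python
def single_nephron_egfr_nl_min(egfr_ml_min: float, nephrons_per_kidney: float) -> float:
    """Spread whole-body eGFR over the nonsclerotic nephrons of both kidneys.

    eGFR is converted from mL/min to nL/min (1 mL = 1e6 nL) and divided by
    the two-kidney nephron count, giving a per-nephron filtration rate.
    """
    total_nephrons = 2 * nephrons_per_kidney
    return egfr_ml_min * 1e6 / total_nephrons
```

With an eGFR of 60 mL/min and 500,000 nephrons per kidney this gives 60 nL/min per nephron; halving the nephron count at the same eGFR doubles the single-nephron load, which is the hyperfiltration logic the abstract is probing.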

15
Stakeholder perspectives on the use of enhanced mobile phone capabilities for public health surveillance for non-communicable disease risk factors: A qualitative study

Mwaka, E. S.; Nabukenya, S.; Kasiita, V.; Bagenda, G.; Rutebemberwa, E.; Ali, J.; Gibson, D.

2026-04-23 health informatics 10.64898/2026.04.22.26351443 medRxiv
Top 1%
1.3%

Background: Mobile phone-based tools are increasingly used to collect data on non-communicable disease (NCD) risk factors, particularly in low-resource settings where traditional data collection systems face operational and infrastructural constraints. This study examined stakeholder perspectives on the use of enhanced mobile phone-based capabilities to support the collection of public health surveillance data on NCD risk factors in low-resource settings. Methods: An exploratory qualitative study was conducted between November 2022 and July 2023. Twenty in-depth interviews were conducted with public health specialists, ethicists, NCD researchers, health informaticians, and policy makers in Uganda. Thematic analysis was used to interpret the results. Results: Four themes emerged from the data, including benefits of using mobile phone capabilities for NCD risk factor data collection; ethical, legal, and social implications; perceived challenges of using such mobile phone capabilities; and proposed solutions to improve the utility of phone-based capabilities in data collection on NCD risk factors. Participants recognized the potential of mobile technologies to improve data collection efficiency and expand access to hard-to-reach populations. However, concerns emerged regarding inadequate informed consent, risks to privacy and confidentiality, unclear data ownership, and vulnerabilities created by inconsistent enforcement of data protection laws. Social concerns included low digital literacy, unequal access to mobile devices, and fear of stigmatization. Participants emphasized the need for transparent communication, robust data governance, and community engagement. Conclusion: Mobile phone-based systems can strengthen the collection of NCD risk factor data in low-resource settings; however, their benefits depend on addressing key ethical, legal, and social challenges. 
To ensure responsible deployment, digital health initiatives must prioritize participant autonomy, data protection, equity, and trust building. Integrating contextualized ethical, legal, and social considerations into design and policy frameworks will be essential to leveraging mobile technologies in ways that support inclusive and effective NCD prevention and control.

16
Data Resource Profile: EST-Health-30

Reisberg, S.; Oja, M.; Mooses, K.; Tamm, S.; Sild, A.; Talvik, H.-A.; Laur, S.; Kolde, R.; Vilo, J.

2026-04-24 epidemiology 10.64898/2026.04.21.26351087 medRxiv
Top 1%
1.2%

Background: The increasing availability of routinely collected health data offers new opportunities for population-level research, yet access to comprehensive, linked, and standardised datasets remains limited. We describe EST-Health-30, a large-scale, population-representative health data resource from Estonia. Methods: EST-Health-30 comprises a random 30% sample of the Estonian population (~500,000 individuals), with longitudinal data from 2012 to 2024 and annual updates planned through 2026. Individual-level records are linked across five nationwide databases, including electronic health records, health insurance claims, prescription data, cancer registry, and cause of death records. A privacy-preserving hashing approach ensures consistent cohort inclusion over time while maintaining pseudonymisation. All data are harmonised to the Observational Medical Outcomes Partnership (OMOP) Common Data Model (version 5.4) using international standard vocabularies. Data quality was assessed using established OMOP-based validation frameworks. Results: The dataset contains rich multimodal information on diagnoses, procedures, laboratory measurements, prescriptions, free-text clinical notes, healthcare utilisation, and costs, with high population coverage and longitudinal depth. Data quality assessment showed high completeness and consistency, with 99.2% of applicable checks passing. The age-sex distribution closely reflects the national population, supporting representativeness, though coverage is marginally below the target 30% (29.2%), primarily attributable to recent immigrants without health system contact. The dataset enables construction of detailed clinical cohorts, analysis of disease trajectories, and evaluation of healthcare utilisation and outcomes across the life course. Conclusions: EST-Health-30 is a comprehensive, standardised, and population-representative real-world data resource that supports epidemiological, clinical, and methodological research. 
Its alignment with the OMOP CDM facilitates reproducible analytics and participation in international federated research networks, while secure access infrastructure ensures compliance with data protection regulations.
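The abstract does not specify EST-Health-30's hashing scheme, but the stated property (the same individuals are consistently included across annual refreshes while remaining pseudonymised) is exactly what a salted, deterministic hash provides. A hypothetical sketch of such a sampler; the salt, identifier format, and bucket scheme are assumptions for illustration:

```python
import hashlib

def in_cohort(person_id: str, secret_salt: str, target_pct: int = 30) -> bool:
    """Deterministic pseudonymised sampling into a fixed-percentage cohort.

    The salted SHA-256 digest maps each person to a stable bucket 0-99,
    so annual data refreshes re-derive the same cohort membership without
    ever storing a plain-text membership list.
    """
    digest = hashlib.sha256((secret_salt + person_id).encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < target_pct
```

Because the digest is uniform over buckets, roughly 30% of identifiers fall in the cohort, and membership is reproducible run-to-run as long as the salt is held constant.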

17
Mapping social profiles in childhood and adolescence: associations with cognition and brain structure

Trachtenberg, E.; Mousley, A.; Jelen, M.; Astle, D.

2026-04-21 neuroscience 10.64898/2026.04.20.719698 medRxiv
Top 1%
1.2%

Objective: Social difficulties are transdiagnostic in childhood, but their heterogeneity is poorly characterised and rarely treated as a primary neurodevelopmental phenotype. This matters because childhood and adolescence are sensitive periods for peer relationships and brain development. We used data-driven modelling and non-linear mapping to derive social profiles and test their clinical, cognitive, and neural correlates. Methods: Participants were 992 children aged 5-18 years from CALM (mean age = 9.6 years). Social items from the SDQ, CCC-2, and Conners-3 were modelled using a regularised partial correlation network to derive core social dimensions. A self-organising map captured graded social profiles. Simulated archetypes, SVM-based island identification, and permutation testing defined profile regions and centroid-distance scores. Profiles were related to referral, diagnosis, cognition, BRIEF indices, and T1-derived MIND network structure in an MRI subsample (n = 431). Results: We identified four profiles: social engagement, friendship difficulties, social withdrawal, and peer victimisation. Profile expression tracked variation in referral and diagnostic pathways. Social withdrawal showed the clearest disadvantage across cognitive domains, whereas social engagement was associated with fewer executive function difficulties across BRIEF indices. MIND strength components covaried with profile expression (a significant PLS latent variable, p = 0.02), with covariance strongest for social withdrawal and peer victimisation. Conclusions: Childhood social functioning organises into graded signatures that relate to clinically relevant pathways, cognitive and executive outcomes, and brain structure. Profiling social signatures provides a scalable framework for identifying social need beyond diagnostic categories, motivating studies to test directionality and improve developmental outcomes.

18
Comparing prognostic performance and reasoning between large language models and physicians

Gjertsen, M.; Yoon, W.; Afshar, M.; Temte, B.; Leding, B.; Halliday, S.; Bradley, K.; Kim, J.; Mitchell, J.; Sanders, A. K.; Croxford, E. L.; Caskey, J.; Churpek, M. M.; Mayampurath, A.; Gao, Y.; Miller, T.; Kruser, J. M.

2026-04-25 intensive care and critical care medicine 10.64898/2026.04.17.26350898 medRxiv
Top 1%
1.1%

Importance: Physicians routinely prognosticate to guide care delivery and shared decision making, particularly when caring for patients with critical illnesses. Yet, these physician estimates are prone to inaccuracy and uncertainty. Artificial intelligence, including large language models (LLMs), shows promise in supporting or improving this prognostication. However, the performance of contemporary LLMs in prognosticating for the heterogeneous population of critically ill patients remains poorly understood. Objective: To characterize and compare the performance of LLMs and physicians when predicting 6-month mortality for hospitalized adults who survived critical illness. Design: Embedded mixed methods study with elicitation and comparison of prognostic estimates and reasoning from LLMs and practicing physicians. Setting: The publicly available, deidentified Medical Information Mart for Intensive Care (MIMIC)-IV v2.2 dataset. Participants: We randomly selected 100 hospitalizations of adult survivors of critical illness. Four contemporary LLMs (OpenAI GPT-4o, o3-mini, o4-mini, and DeepSeek-R1) and 7 physicians provided independent prognostic estimates for each case (1,100 total estimates; 400 LLM and 700 physician). Main outcomes and measures: For each case, LLMs and physicians used the hospital discharge summary and demographics to predict 6-month mortality (yes/no) and provide their reasoning (free text). We assessed prognostic performance using accuracy, sensitivity, and specificity, and used inductive, qualitative content analysis to characterize the reasoning. Results: Mean physician accuracy for predicting mortality was 70.1% (95% CI 63.7-76.4%), with sensitivity of 59.7% (95% CI 50.6-68.8%) and specificity of 80.6% (95% CI 71.7-88.2%). The top-performing LLM (OpenAI o4-mini) accuracy was 78.0% (95% CI 70.0-86.0%), with sensitivity of 80.0% (95% CI 67.4-90.2%) and specificity of 76.0% (95% CI 63.3-88.0%).
The difference between mean physician and top-performing LLM accuracy was not statistically significant (p = 0.5). Qualitative analysis revealed similar patterns in the reasoning expressed by LLMs and physicians, except that physicians regularly and explicitly reported uncertainty while LLMs did not. Conclusion and Relevance: In this study, LLMs and physicians achieved comparable, moderate performance in predicting 6-month mortality after critical illness, with similar patterns in expressed reasoning. Our findings suggest LLMs could be used to support prognostication in clinical practice but also raise safety concerns due to the lack of LLM uncertainty expression.
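Accuracy, sensitivity, and specificity with binomial confidence intervals follow directly from a confusion matrix. A minimal sketch; the counts in the usage note are illustrative values consistent with the reported o4-mini point estimates (not the study's raw data), and the Wilson score interval is one common CI choice, since the abstract does not state which CI method was used:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 gives 95%)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

def classification_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Standard binary-classification summary from confusion-matrix counts."""
    n = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / n,
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }
```

With illustrative counts tp=40, fn=10, tn=38, fp=12 (100 cases, 50 deceased), `classification_metrics` returns accuracy 0.78, sensitivity 0.80, and specificity 0.76, matching the reported point estimates; `wilson_ci(78, 100)` then brackets the accuracy estimate.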

19
Retrospective analysis of clinical and environmental genotyping reveals persistence of Pseudomonas aeruginosa in the water system of a large tertiary children's hospital in England

Sheth, E.; Case, L.; Shaw, F.; Dwyer, N.; Poland, J.; Wan, Y.; Larru, B.

2026-04-24 infectious diseases 10.64898/2026.04.23.26351604 medRxiv
Top 2%
0.9%

Background Pseudomonas aeruginosa is a major cause of healthcare-associated infections in paediatric settings, where its persistence in moist environments such as hospital water and wastewater systems poses a particular risk to neonates and immunocompromised children. Aim The aim of this study was to characterise the long-term survival and transmission of P. aeruginosa in a large tertiary children's hospital in England; such evidence is crucial for developing strategies for water-safe care. Methods Environmental P. aeruginosa isolates were collected from taps, sinks, showers, and baths in augmented care areas of a 330-bed tertiary children's hospital built to NHS water-safety standards. Clinical isolates were classified as invasive (blood, cerebrospinal fluid, and bronchoalveolar lavage) or non-invasive (respiratory, urine, ear, abdominal, and rectal surveillance). Variable number tandem repeat (VNTR) profiles and metadata were extracted from PDF reports, de-identified, deduplicated, and curated using Python and R. Findings This retrospective study analysed nine-locus VNTR profiles of 457 P. aeruginosa isolates submitted to the UK Health Security Agency from a large tertiary children's hospital, identifying 56 isolate clusters (each with ≥2 isolates), of which 19 (34%) contained at least one invasive isolate. The most persistent cluster (Cluster 1, n=20) spanned from July 2016 to September 2024, containing environmental and clinical (invasive and non-invasive) isolates. Conclusion These findings demonstrate long-term persistence of certain genotypes and temporal overlap between environmental and clinical isolates, highlighting the difficulty in detecting and eradicating P. aeruginosa in hospital water and wastewater systems and reinforcing the need for continuous rigorous water system controls.

20
Tongue swab-based Targeted Universal Tuberculosis Testing in people living with HIV in KwaZulu-Natal, South Africa

Olson, A. M.; Wood, R. C.; Sithole, N.; Govender, I.; Grant, A. D.; Smit, T.; David, A.; Stevens, W.; Scott, L.; Drain, P. K.; Cangelosi, G. A.; Shapiro, A. E.

2026-04-25 public and global health 10.64898/2026.04.17.26351084 medRxiv
Top 2%
0.9%

Background. Targeted Universal Tuberculosis Testing (TUTT) may increase tuberculosis (TB) case detection by including people who are not actively seeking TB care but are at high risk of the disease. Non-invasive tongue swab (TS) testing may facilitate TUTT. We evaluated two TS testing protocols in people with HIV (PWH) tested irrespective of TB symptoms. Methods. Study staff collected Copan FLOQSwab and Medline foam swab specimens, alongside urine and sputa, from PWH, most of whom were presenting for antiretroviral therapy initiation at primary healthcare clinics in KwaZulu-Natal, South Africa. FLOQSwabs were tested by sequence-specific magnetic capture (SSMaC) with qPCR (FLOQSwab-SSMaC). Foam swabs were tested by centrifuge-sedimentation and high-volume qPCR (foam-sedimentation). Urine lipoarabinomannan was detected using LF-LAM. The extended microbiological reference standard (eMRS) comprised any positive result on Xpert Ultra and/or liquid culture of sputum. Results. We enrolled 251 participants (median age 34 years, 56% female, 67% with self-reported TB symptoms). Participants had a median CD4 count of 347 cells/µL, and 16% (40/251) had prior TB. FLOQSwab-SSMaC was 43% sensitive (13/30) and 100% specific (131/131) relative to eMRS. Foam-sedimentation was 47% (9/29) sensitive and 100% (176/176) specific. Sensitivity increased to 52% (FLOQSwab-SSMaC) and 50% (foam-sedimentation) when sputum Xpert Ultra Trace positive results were excluded from eMRS. TS was more sensitive than urine LAM, and both sample types were more sensitive when CD4 counts were below 200 cells/µL. Discussion. TS testing detected about half of PWH with TB and outperformed urine LAM within this population, including among PWH with low CD4 counts.